Opus: Heterogeneous Computing with Data Parallel Tasks

Authors

  • Erwin Laure
  • Piyush Mehrotra
  • Hans P. Zima
Abstract

The coordination language Opus is an object-based extension of High Performance Fortran (HPF) that supports the integration of coarse-grain task parallelism with HPF-style data parallelism. In this paper we discuss Opus in the context of multidisciplinary applications (MDAs) which execute in a heterogeneous environment. After outlining the major properties of such applications and a number of different approaches towards providing language and tool support for MDAs, we describe the salient features of Opus and its implementation, emphasizing the issues related to the coordination of data-parallel HPF programs in a heterogeneous environment.
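
To make the data-parallel half of this concrete, the fragment below is a minimal HPF-style kernel sketch; the DISTRIBUTE and ALIGN directives and the FORALL construct are standard HPF/Fortran, while the subroutine name and the Jacobi-style update are illustrative only. In Opus, a routine of this kind would form the body of a single coarse-grain task; the task-coordination constructs that Opus adds on top of HPF are not reproduced here.

  ! Minimal HPF-style data-parallel kernel. The !HPF$ lines are
  ! directives to an HPF compiler and plain comments to any other
  ! Fortran compiler.
  subroutine relax(u, f, n)
    integer, intent(in)    :: n
    real,    intent(inout) :: u(n, n)
    real,    intent(in)    :: f(n, n)
  !HPF$ DISTRIBUTE u(BLOCK, BLOCK)
  !HPF$ ALIGN f(:, :) WITH u(:, :)
    integer :: i, j

    ! Jacobi-style sweep; an HPF compiler maps the FORALL iterations
    ! onto processors according to the BLOCK distribution of u.
    forall (i = 2:n-1, j = 2:n-1)
      u(i, j) = 0.25 * (u(i-1, j) + u(i+1, j) + u(i, j-1) + u(i, j+1) - f(i, j))
    end forall
  end subroutine relax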

Similar articles

Green Energy-aware task scheduling using the DVFS technique in Cloud Computing

Nowadays, energy consumption has become a critical issue in high-performance distributed computing systems, so green computing tries to reduce energy consumption, carbon footprint and CO2 emissions in high-performance computing systems (HPCs) such as clusters, Grids and Clouds that run a large number of parallel tasks. Reducing energy consumption for high-end computing can bring various benefits such as red...

Opus: A Coordination Language for Multidisciplinary Applications

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is ...

High Level Support for Distributed High Performance Computing, Fakultät für Wirtschaftswissenschaften und Informatik, Universität Wien

Recent trends in hardware, in particular in interconnection technologies, have paved the way to the exploitation of heterogeneous, distributed computing platforms for advanced scientific applications. This infrastructure enables the building of meta-applications that are composed of several modules which may be implemented in different languages, exploit heterogeneous platforms, and employ severa...

Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.

Parallel computing is a topic of interest for a broad scientific community since it facilitates many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing by using the MPI and OpenMP programming languages based on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
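
As background for the MPI-plus-OpenMP combination mentioned above, the sketch below shows a minimal hybrid program in Fortran; it is not code from the UMZHPC platform itself, and the program name, build line and process/thread counts are illustrative. Each MPI process reports its rank and then opens an OpenMP thread team.

  ! Minimal hybrid MPI + OpenMP sketch: MPI ranks across processes,
  ! an OpenMP thread team inside each process.
  ! Typical build and run: mpif90 -fopenmp hybrid.f90 -o hybrid
  !                        mpirun -np 4 ./hybrid
  program hybrid
    use mpi
    use omp_lib
    implicit none
    integer :: ierr, rank, nprocs, provided

    ! Request a thread-support level that tolerates OpenMP threads
    ! inside each MPI process.
    call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    !$omp parallel
    print '(a,i0,a,i0,a,i0,a,i0)', 'process ', rank, ' of ', nprocs, &
          ', thread ', omp_get_thread_num(), ' of ', omp_get_num_threads()
    !$omp end parallel

    call MPI_Finalize(ierr)
  end program hybrid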

A Java Framework for Distributed High Performance Computing

The past few years have dramatically changed the view of high performance applications and computing. While traditionally such applications have been targeted towards dedicated parallel machines, we see the emerging trend of building "meta-applications" composed of several modules that exploit heterogeneous platforms and employ hybrid forms of parallelism. In particular, Java has been recognize...

Journal:
  • Parallel Processing Letters

Volume: 9, Issue: -

Pages: -

Publication year: 1999